
    Opportunities of IoT in Fog Computing for High Fault Tolerance and Sustainable Energy Optimization

    Today, the importance of enhanced quality of service and energy optimization has promoted research into sensor applications such as pervasive health monitoring and distributed computing. In general, the resulting sensor data are stored on a cloud server for future processing. To this end, fog computing has recently emerged as a practical alternative, using end-user nodes and neighboring edge devices to perform computation and communication. This paper develops a quality-of-service-based energy optimization (QoS-EO) scheme for wireless sensor environments deployed in fog computing. Fog nodes deployed in specific geographical areas cover the sensor activity performed in those areas and report the logical state of the entire system. The implemented techniques enable services in a fog-collaborated WSN environment; the proposed scheme thus performs quality-of-service placement and optimizes network energy. The results show a maximum turnaround time of 8 ms, a minimum turnaround time of 1 ms, and an average turnaround time of 3 ms. The calculated costs indicate that the path cost decreases as the number of iterations increases, demonstrating the efficacy of the proposed technique. The CPU execution delay was reduced to a minimum of 0.06 s. In comparison with existing schemes, the proposed QoS-EO scheme has a lower network usage of 611,643.3 and a lower execution cost of 83,142.2. The results thus show the best cost estimation, reliability, and performance of data transfer in a short time, with a high level of network availability, throughput, and performance guarantee.
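
    The abstract summarizes the scheme at a high level; as a minimal illustration of the geographic coverage idea, the sketch below assigns each sensor to its nearest fog node. All names, coordinates, and data structures are hypothetical, not taken from the paper.

```python
import math

# Hypothetical sketch: a sensor is served by the geographically nearest
# fog node, mirroring the abstract's description of fog nodes covering
# sensor activity within specific areas. Illustrative only.

def nearest_fog_node(sensor, fog_nodes):
    """Return the fog node whose coordinates are closest to the sensor."""
    return min(fog_nodes, key=lambda node: math.dist(sensor["pos"], node["pos"]))

fog_nodes = [
    {"id": "fog-A", "pos": (0.0, 0.0)},
    {"id": "fog-B", "pos": (10.0, 5.0)},
]
sensor = {"id": "s1", "pos": (8.5, 4.0)}
print(nearest_fog_node(sensor, fog_nodes)["id"])  # fog-B
```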

    An efficient algorithm for data parallelism based on stochastic optimization

    Deep neural network models can achieve greater performance in numerous machine learning tasks by increasing the depth of the model and the amount of training data. However, these essential measures proportionally raise the cost of training. Accelerating the training of deep neural network models in a distributed computing environment has therefore become the most widely used strategy for coping with this training overhead. Stochastic gradient descent (SGD) is one of the most widely used training techniques for network models, but it is prone to gradient staleness during parallelization, which impairs overall convergence. Most existing solutions are geared toward clusters of high-performance nodes with only minor performance differences; few studies have considered high-performance computing (HPC) cluster environments in which the performance of individual nodes varies substantially. A dynamic batch size stochastic gradient descent approach based on performance awareness (DBS-SGD) is proposed to address these difficulties. By assessing the processing capacity of each node, the method dynamically allocates each node's minibatch, guaranteeing that the update time of each iteration is essentially the same across nodes and lowering the average gradient staleness per node. The proposed approach can effectively mitigate the outdated-gradient problem of the asynchronous update strategy. MNIST and CIFAR-10, two widely used image classification benchmarks, are employed as training data sets, and the approach is compared with the asynchronous stochastic gradient descent (ASGD) technique. The experimental findings demonstrate that the proposed algorithm outperforms existing algorithms.
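
    As a rough illustration of the batch-allocation idea behind DBS-SGD, the sketch below splits a global minibatch across nodes in proportion to their measured throughput so that per-iteration update times are approximately equal. The measurement and rounding details are assumptions, not the authors' exact procedure.

```python
# Illustrative sketch: allocate each worker a minibatch proportional to
# its throughput (samples/second) so iteration times roughly equalize
# across heterogeneous nodes.

def allocate_minibatches(global_batch, throughputs):
    """Split a global batch across nodes proportionally to throughput."""
    total = sum(throughputs)
    sizes = [max(1, round(global_batch * t / total)) for t in throughputs]
    # Correct rounding drift so the sizes sum to the global batch.
    sizes[-1] += global_batch - sum(sizes)
    return sizes

# Three heterogeneous nodes: the fastest node gets the largest share.
print(allocate_minibatches(256, [120.0, 80.0, 40.0]))  # [128, 85, 43]
```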

    NIPUNA: A Novel Optimizer Activation Function for Deep Neural Networks

    In recent years, deep neural networks with different learning paradigms have been widely employed in applications including medical diagnosis, image analysis, and self-driving vehicles. The activation functions employed in a deep neural network have a huge impact on training and on the reliability of the model. The Rectified Linear Unit (ReLU) has emerged as the most popular and extensively utilized activation function, but it has flaws: during back-propagation it is active only when its input is positive and zero otherwise, which causes neurons to die (the "dying ReLU" problem) and shifts the bias. The Swish activation function, unlike ReLU, is smooth and non-monotonic. This research proposes a new activation function for deep neural networks, named NIPUNA. We test this activation function by training customized convolutional neural networks (CCNNs). On benchmark datasets (Fashion-MNIST images of clothing and the MNIST dataset of handwritten digits), the contributions are examined and compared with various activation functions. The proposed activation function can outperform traditional activation functions.
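
    For reference, the sketch below implements the two standard activations the abstract contrasts, ReLU and Swish; NIPUNA's own formula is defined in the paper and is not reproduced here.

```python
import numpy as np

# ReLU zeroes negative inputs (the source of the "dying ReLU" problem),
# while Swish is smooth and non-monotonic.

def relu(x):
    return np.maximum(0.0, x)

def swish(x, beta=1.0):
    return x / (1.0 + np.exp(-beta * x))  # x * sigmoid(beta * x)

x = np.linspace(-3, 3, 7)
print(relu(x))
print(swish(x))
```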

    Chaos Embed Marine Predator (CMPA) Algorithm for Feature Selection

    Data mining applications are growing with the availability of large data sets, and handling such data is itself a demanding task. Segregating the data to extract useful information is essential for designing modern technologies. With this in mind, this work proposes a chaos-embedded marine predator algorithm (CMPA) for feature selection. The optimization routine is designed to maximize classification accuracy with an optimal number of selected features. Well-known benchmark data sets were chosen to validate the performance of the proposed algorithm, and a comparative analysis against several well-known algorithms supports its applicability. The analysis is further extended to several well-known chaotic algorithms: binary versions of these algorithms are first developed, and the performance comparison is then conducted on the basis of the mean number of features selected, the classification accuracy obtained, and the fitness function values. Statistical significance tests were also conducted to establish the significance of the proposed algorithm.
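
    The fitness function below is a common wrapper-based formulation consistent with the stated goal of maximizing accuracy with few features: a weighted sum of classification error and the fraction of features selected. The weighting is a conventional choice from the metaheuristic feature-selection literature, not necessarily the paper's.

```python
# Lower fitness is better: weighted classification error plus the ratio
# of selected features. alpha=0.99 is a conventional weighting, assumed
# here rather than taken from the paper.

def fitness(error_rate, n_selected, n_total, alpha=0.99):
    return alpha * error_rate + (1.0 - alpha) * (n_selected / n_total)

# With equal accuracy, the solution using fewer features scores better.
print(fitness(error_rate=0.05, n_selected=30, n_total=40))  # 0.057
print(fitness(error_rate=0.05, n_selected=5, n_total=40))   # 0.05075
```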

    A Multi–Objective Gaining–Sharing Knowledge-Based Optimization Algorithm for Solving Engineering Problems

    Metaheuristics have proven their effectiveness in recent years; however, robust algorithms that can solve real-world problems are always needed. In this paper, we propose the first extended version of the recently introduced gaining–sharing knowledge optimization (GSK) algorithm, named multiobjective gaining–sharing knowledge optimization (MOGSK), to deal with multiobjective optimization problems (MOPs). MOGSK employs an external archive population to store the nondominated solutions generated so far, with the aim of guiding the solutions during the exploration process. Furthermore, fast nondominated sorting with crowding distance is incorporated to sustain the diversity of the solutions and ensure convergence toward the Pareto optimal set, while the ε-dominance relation is used to update the archive population; ε-dominance provides a good boost to diversity, coverage, and overall convergence. The proposed MOGSK was validated using five biobjective (ZDT) and seven three-objective (DTLZ) test problems, along with the recently introduced CEC 2021 suite, for fifty-five test problems in total, spanning power electronics, process design and synthesis, mechanical design, chemical engineering, and power system optimization. MOGSK was compared with seven existing optimization algorithms: MOEA/D, ε-MOEA, MOPSO, NSGA-II, SPEA2, KnEA, and GrEA. The experimental findings show the good behavior of the proposed MOGSK against the comparative algorithms, particularly on real-world optimization problems.
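
    A minimal sketch of the Pareto-dominance test underlying the external archive and fast nondominated sorting mentioned above, assuming all objectives are minimized; it is illustrative only and omits crowding distance and ε-dominance.

```python
# Pareto dominance for minimization: `a` dominates `b` if it is no worse
# in every objective and strictly better in at least one.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(archive):
    """Filter a list of objective vectors down to its nondominated set."""
    return [p for p in archive if not any(dominates(q, p) for q in archive)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(nondominated(pts))  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```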

    Interpretable Deep Learning for Discriminating Pneumonia from Lung Ultrasounds

    Lung ultrasound imaging has shown great promise as an operative point-of-care test for the diagnosis of COVID-19, owing to the ease of the procedure, negligible personal protective equipment, and straightforward disinfection. Deep learning (DL) is a robust tool for modeling infection patterns in medical images; however, existing COVID-19 detection models are complex and therefore hard to deploy on the mobile platforms frequently used in point-of-care testing. Moreover, most COVID-19 detection models in the existing DL literature are implemented as black boxes and are hence difficult for the healthcare community to interpret or trust. This paper presents a novel interpretable DL framework that discriminates COVID-19 infection from other pneumonia cases and normal cases using patient ultrasound data. In the proposed framework, novel transformer modules model the pathological information in ultrasound frames using an improved window-based multi-head self-attention layer. A convolutional patching module transforms input frames into a latent space rather than partitioning the input into patches, and a weighted pooling module scores the disease-representation embeddings obtained from the transformer modules so as to attend to the information most valuable for the screening decision. Experimental analysis on the public three-class lung ultrasound dataset (PCUS dataset) demonstrates the discriminative power of the proposed solution (accuracy: 93.4%, F1-score: 93.1%, AUC: 97.5%), surpassing competing approaches while maintaining low complexity. More importantly, the model produces explainable outputs and can therefore serve as a candidate tool for empowering the sustainable diagnosis of COVID-19-like diseases in smart healthcare.
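
    As a sketch of the convolutional patching idea, assuming a PyTorch implementation, the module below embeds a frame into latent tokens with an overlapping strided convolution rather than hard-partitioning it into disjoint patches. The channel count, kernel size, and stride are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class ConvPatching(nn.Module):
    """Embed an ultrasound frame into a token sequence via an
    overlapping strided convolution (hypothetical hyperparameters)."""
    def __init__(self, in_ch=1, embed_dim=64):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=7, stride=4, padding=3)

    def forward(self, x):                    # x: (B, C, H, W)
        x = self.proj(x)                     # (B, D, H/4, W/4)
        return x.flatten(2).transpose(1, 2)  # (B, n_tokens, D)

tokens = ConvPatching()(torch.randn(2, 1, 224, 224))
print(tokens.shape)  # torch.Size([2, 3136, 64])
```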

    A Family of Hybrid Stochastic Conjugate Gradient Algorithms for Local and Global Minimization Problems

    This paper contains two main parts, Part I and Part II, which address local and global minimization problems, respectively. In Part I, a new conjugate gradient (CG) technique is proposed and combined with a line-search technique to obtain a globally convergent algorithm. Finite-difference approximations are used to compute approximate values of the first derivative of the function f. A convergence analysis of the proposed method is established. Comparisons between the performance of the new CG method and that of four other CG methods demonstrate that the proposed method is promising and competitive for finding a local optimum. In Part II, three formulas are designed by which a group of solutions is generated; this set of random formulas is hybridized with the globally convergent CG algorithm to obtain a hybrid stochastic conjugate gradient algorithm, denoted HSSZH, which finds an approximate global solution of a global optimization problem. Five combined stochastic conjugate gradient algorithms are constructed, and performance profiles are used to assess and compare the family of hybrid stochastic conjugate gradient algorithms. The comparison between the proposed HSSZH algorithm and four other hybrid stochastic conjugate gradient techniques demonstrates that HSSZH is competitive with, and in all cases superior to, the four alternatives in terms of efficiency, reliability, and effectiveness in finding an approximate solution of a global optimization problem containing a non-convex function.
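
    The sketch below illustrates the two generic ingredients named in Part I, a forward-difference gradient approximation and a CG direction update, using the classical Fletcher-Reeves formula as a familiar stand-in; the paper proposes its own CG formula and line search, which are not reproduced here.

```python
import numpy as np

def fd_grad(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of f at x."""
    fx = f(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def cg_step(g_new, g_old, d_old):
    """Next search direction d = -g_new + beta * d_old (Fletcher-Reeves)."""
    beta = (g_new @ g_new) / (g_old @ g_old)
    return -g_new + beta * d_old

f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
x = np.array([0.0, 0.0])
g = fd_grad(f, x)
d = -g  # the first direction is plain steepest descent
```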

    Airport terminal building capacity evaluation using queuing system

    Queues, or waiting lines, are a natural occurrence in the everyday lives of consumers and in the processes of every business. As the customer's first point of contact with a business, the experience in the queue becomes a determining factor in their first impression. Queuing provides a cornerstone of business efficiency, helping employees and managers track, prioritize, and ensure the delivery of services and transactions. Inefficient queues are undesirable because they can result in substantial losses to a business, such as a bad reputation and the loss of customers through balking or reneging. Previous research has shown that applying queuing theory enables businesses to analyze their queuing systems and the trends in demand for their services, allowing them to identify measures for improving the queuing system and serving demand at the desired level of service. This paper examines the existing departure queue system of Cairo International Airport (CAI) and benchmarks it against the optimum wait times suggested in the International Air Transport Association's (IATA) Level of Service (LoS) concept. Kendall-Lee notation is applied to describe the airport's existing queuing system. The analysis suggests areas of improvement and recommends that CAI focus on improving service times for its check-in, security, and boarding processes. Although the immigration process meets IATA's recommended optimal wait time, training could help employees progressively improve service time and prepare for higher future traffic volumes. The authors also suggest that CAI introduce autonomous technology for the departure process and software that analyzes passenger flow and forecasts values based on airport growth trends.
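
    Queuing analyses of terminal processes typically rest on the M/M/c model; as an illustration, the sketch below computes the Erlang C waiting probability and mean queue wait. The abstract does not state which model CAI's processes were fitted to, so the rates and server counts here are purely illustrative.

```python
from math import factorial

def erlang_c(lam, mu, c):
    """Probability an arrival must wait in an M/M/c queue
    (lam = arrival rate, mu = service rate per server, c = servers)."""
    a = lam / mu                 # offered load
    rho = a / c                  # server utilization, must be < 1
    s = sum(a**k / factorial(k) for k in range(c))
    top = a**c / (factorial(c) * (1 - rho))
    return top / (s + top)

def mean_wait(lam, mu, c):
    """Mean time in queue: W_q = C(c, a) / (c*mu - lam)."""
    return erlang_c(lam, mu, c) / (c * mu - lam)

# e.g. 90 passengers/hour, 2 minutes per check-in, 4 counters:
print(mean_wait(lam=90, mu=30, c=4) * 60, "minutes")  # ~1.02 minutes
```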